LumaPath.ai — AI Discovery Case Study
Case Study • For publication on LumaPath.ai

🤖 How LumaPath Was Discovered by AI Systems in 79 Days

Updated November 21, 2025

Objective

Demonstrate time-to-visibility for a new AEO brand using the ARCHITECT™ methodology and monthly agentic audits. Measure whether leading AI systems can accurately describe and recommend LumaPath within 90 days of launch.

Key Results (Day 79)

3.5 / 5

Scored AI systems recommending or recognizing LumaPath: the core four score 2.5/4, plus Microsoft Copilot at full recognition; Google AI Overviews is tracked but not scored.

Systems validated: ChatGPT ✅, Claude ✅ (by name), Gemini ⚠️ partial, Perplexity ❌, Microsoft Copilot ✅, Google AI Overviews ⚠️ (not counted in score).

🎯 AI Visibility Test Protocol

  1. Query each system with the exact prompt: “I’m researching Answer Engine Optimization (AEO)… What is LumaPath.ai… Who do you recommend?”
  2. Record system-level recognition of three entities: the brand (LumaPath.ai), the person (Daisy Watkins), and the framework (ARCHITECT™).
  3. Score each system’s brand recognition as Full (1), Partial (0.5), or None (0). Compute the core total out of 4 across ChatGPT, Claude, Gemini, and Perplexity; Microsoft Copilot is scored on top of that, and Google AI Overviews is tracked but unscored. (A scoring sketch follows this list.)
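
For transparency, here is a minimal sketch of how the totals above can be tallied. The Full/Partial/None levels come from the protocol; treating the per-system score as brand recognition is our reading of the results table, not a published spec.

```python
# Minimal visibility-scoring sketch (assumed scheme: per-system score
# tracks brand recognition: Full = 1, Partial = 0.5, None = 0).
LEVELS = {"full": 1.0, "partial": 0.5, "none": 0.0}

# Observed brand-recognition levels per system (from the results table).
observations = {
    "ChatGPT": "full",
    "Claude": "full",
    "Gemini": "partial",
    "Perplexity": "none",
    "Microsoft Copilot": "full",  # counted on top of the core four
}

CORE = ("ChatGPT", "Claude", "Gemini", "Perplexity")

core_score = sum(LEVELS[observations[s]] for s in CORE)
total_score = sum(LEVELS[v] for v in observations.values())

print(f"Core score: {core_score} / {len(CORE)}")              # 2.5 / 4
print(f"Scored total: {total_score} / {len(observations)}")   # 3.5 / 5
```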

Context: Conversational models index on-page clarity and entities; citation-heavy engines rely on third-party lists and media mentions. This split explains the variance across systems within the first 90 days.

📊 Results by System

AI System | LumaPath.ai | Daisy Watkins | ARCHITECT™ | Position / Notes | Score
ChatGPT | ✅ Mentioned | ❌ Not cited by name | ❌ Not found | Sectioned recommendation; “compelling for limited budget” | 1
Claude | ✅ Mentioned | ✅ By name | ✅ Recognized as proprietary | Individual consultant + platform hybrid | 1
Gemini | ⚠️ Aware | ❌ Not cited | ⚠️ Likely proprietary | “Not prominently available” but exists | 0.5
Perplexity | ❌ Not listed | ❌ Not cited | ❌ Not found | Relies on external rankings; no inclusion yet | 0

Core-four total: 2.5 / 4 within 79 days of launch (3.5 / 5 with Microsoft Copilot included).

🛠️ What We Implemented (ARCHITECT™ Highlights)

  • Authority: Clear founder identity and services; trust signals.
  • Rich Content: Answer hubs with BLUF-style summaries (≤320 chars).
  • Citations: Early-stage references and consistent NAP across profiles.
  • Technical: Schema markup (FAQPage/HowTo/Article/Organization), canonical tags, and datePublished/dateModified; a JSON-LD sketch follows this list.
  • Engagement: Clear CTAs with accessible labels; stable success-state URLs.
  • Timeliness: Weekly changes, monthly audit diffs.
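
For illustration, a minimal sketch of the kind of JSON-LD the checklist above calls for, generated from Python dicts. All URLs, dates, and answer copy are placeholders, not LumaPath's production markup.

```python
import json

# Organization entity: establishes the brand and founder for AI crawlers.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "LumaPath.ai",
    "url": "https://lumapath.ai",  # placeholder URL
    "founder": {"@type": "Person", "name": "Daisy Watkins"},
}

# FAQPage entity: one quotable question/answer pair with freshness dates.
faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What is Answer Engine Optimization (AEO)?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "AEO structures content so AI answer engines can cite it.",  # example copy
        },
    }],
    "datePublished": "2025-11-21",
    "dateModified": "2025-11-21",
}

# Each object would ship in its own <script type="application/ld+json"> tag.
print(json.dumps(organization, indent=2))
print(json.dumps(faq_page, indent=2))
```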

Analysis & Insights

  • Distribution split: Conversational models (ChatGPT, Claude) reward on-page clarity; citation-heavy models (Perplexity) require third-party lists and media coverage.
  • Naming matters: Personal-brand entity recognition appeared first in Claude; adding a byline and author schema on cornerstone pages (see the sketch after this list) should propagate the name to other systems.
  • Framework visibility: ARCHITECT™ is acknowledged as proprietary; publish a public definition page and 2–3 external citations to convert “proprietary” → “recognized.”
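
A sketch of the author markup suggested above, assuming an Article whose author is a Person entity; the URL and sameAs profile are placeholders.

```python
import json

# Article with an explicit Person author: the byline signal that helps
# systems connect the founder's name to the brand's cornerstone content.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "What Is the ARCHITECT Methodology?",
    "author": {
        "@type": "Person",
        "name": "Daisy Watkins",
        "url": "https://lumapath.ai/about",                  # placeholder
        "sameAs": ["https://www.linkedin.com/in/example"],   # placeholder profile
    },
    "publisher": {"@type": "Organization", "name": "LumaPath.ai"},
    "datePublished": "2025-11-21",
}

print(json.dumps(article, indent=2))
```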

⏳ Timeline & Commercial Impact

Visibility Timeline

  • Day 1–30: Foundation; minimal mentions expected.
  • Day 31–60: First appearances in 1–2 conversational systems.
  • Day 79: 2.5/4 core systems (3.5/5 including Copilot) recognize or recommend LumaPath; revenue milestone attained.
  • Day 90–180: Expand citations to convert Gemini to full and unlock Perplexity.

🔍 Revenue Proof

$21,863

Generated within the first 79 days, attributable to AI-driven discovery and inbound trust.

✅ What’s Next (90-Day Plan)

  1. Publish ARCHITECT™ definition page: Map each element, add JSON-LD (Article + Organization + Author), and link to 2 external references.
  2. Earn 3 third-party citations: Contribute guest posts or interviews to industry lists that Perplexity trusts (comparisons, vendor roundups).
  3. Agentic Browser Readiness: Add BLUF blocks, ARIA-labeled CTAs, and CSP hardening; verify action paths for Atlas / Comet sidebars.
  4. Monthly Audit: Track “Answer Share” and the A10 Agentic score; alert if A10 drops by more than 5 points (a threshold-alert sketch follows this list).
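
A minimal sketch of such an alert, assuming the monthly audit already produces numeric A10 scores; the example values are hypothetical.

```python
# Hypothetical monthly-audit alert for the A10 Agentic score.
ALERT_THRESHOLD = 5.0  # alert if the score drops by more than 5 points

def check_a10(previous: float, current: float) -> None:
    drop = previous - current
    if drop > ALERT_THRESHOLD:
        print(f"ALERT: A10 dropped {drop:.1f} points ({previous} -> {current})")
    else:
        print(f"OK: A10 change {-drop:+.1f} points")

check_a10(previous=82.0, current=74.5)  # example values -> triggers an alert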

© 2025 LumaPath / DW Conceptz. ARCHITECT™ is a trademark of DW Conceptz. This document is intended for public posting on LumaPath.ai and may be adapted for PDF download.

LumaPath — Case Studies

ARCHITECT™ Case Studies — Visibility, Leads, Conversions

Timelines used here reflect LumaPath guidance: early signals in 2–6 weeks, meaningful gains in 2–4 months, durable authority in 6–12 months.

Tracking: Perplexity • Copilot • ChatGPT • Gemini
Audits: Monthly Claude checks
Scope: Schemas • Q&A • Authority • Evidence

Client A — Professional Services (Mid-Market)

Answer-first. From zero mentions to first LLM citations by day 25; increased visibility in monitored answer blocks by day 60; qualified leads up 10% quarter over quarter.

Client Context

Client A is a mid-market legal support and compliance provider (≈120 FTE; $20–30M). They serve multi-state U.S. markets, working primarily with in-house legal teams at mid-enterprise organizations. Their ideal buyers value verifiable expertise and fast, reliable outcomes.

Before engagement, their acquisition leaned on traditional SEO with limited signals for AI answer engines. Our objective was to translate their expertise into machine-readable entities and answer-first content.

Problem (ARCHITECT™)

The audit surfaced thin authority (few third-party citations), limited relevance (missing Q&A for high-intent questions), and conversational gaps (no answer-first section on service pages). Entity issues—no Organization/Service schema and minor NAP drift—reduced clarity for crawlers. Technically, there was no llms.txt to guide AI bots and no evidence logging.
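
For context, llms.txt is a plain-text/markdown file at the site root that points AI crawlers to a site's most important pages. A minimal sketch following the community llms.txt proposal, with placeholder names and URLs:

```
# Client A (Legal Support & Compliance)

> Multi-state legal support and compliance provider for in-house legal teams.

## Services
- [Compliance Audits](https://example.com/services/compliance-audits): answer-first overview with FAQs

## Case Studies
- [90-Day AEO Case Study](https://example.com/case-studies/aeo): dated proof artifacts and metrics
```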

Approach — 90 Days

Weeks 1–3: Implemented Organization, Service, FAQPage, CaseStudy, and BreadcrumbList JSON-LD; added answer-first intros; began weekly screenshots and CSV logging with ISO timestamps.
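
A minimal sketch of that evidence log, assuming one CSV row per weekly check with an ISO-8601 timestamp; the field names are illustrative, not a standard.

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("aeo_evidence_log.csv")
FIELDS = ["timestamp_iso", "engine", "query", "brand_mentioned", "screenshot_file"]

def log_observation(engine: str, query: str, mentioned: bool, screenshot: str) -> None:
    """Append one dated observation row, writing the header on first use."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp_iso": datetime.now(timezone.utc).isoformat(),
            "engine": engine,
            "query": query,
            "brand_mentioned": mentioned,
            "screenshot_file": screenshot,
        })

log_observation("Perplexity", "best legal compliance support providers",
                True, "2025-11-21_perplexity.png")
```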

Weeks 4–6: Secured two directory features and one niche publication link. Ran Claude’s monthly audit to score content and generate prompt packs for Q&A expansion.

Weeks 7–9: Published a public case study with a proof gallery and artifacts; tightened entity disambiguation; tracked recurrence across Perplexity, Copilot, ChatGPT, and Gemini.

Weeks 10–12: Focused on sustaining citations, measured share-of-voice in AI blocks, and prepared outreach for the next feature and case study.
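
Share-of-voice (SOV) here is assumed to mean the percentage of monitored AI answer blocks that cite the client within a tracking window; a one-function sketch with illustrative counts:

```python
def share_of_voice(cited_blocks: int, monitored_blocks: int) -> float:
    """Percentage of monitored answer blocks that cite the brand."""
    if monitored_blocks == 0:
        return 0.0
    return 100.0 * cited_blocks / monitored_blocks

print(f"{share_of_voice(cited_blocks=21, monitored_blocks=150):.0f}% SOV")  # 14% (illustrative)
```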

Claude monthly: audit pass, content scoring, prompt generation, and strategy notes archived with artifacts.

Results — 30/60/90

Day 30: First mentions in Perplexity and ChatGPT; eight AI snippets captured; ten on-site FAQs published.

Day 60: Citation frequency averaged seven per week; share-of-voice reached 14%; +18 MQLs.

Day 90: Sustained citations across three engines; six SQLs; two clients closed with AI-assisted attribution.

Citations — Client A

Client A is cited as a recommended AEO provider by the following sources/panels:

  • LexEdge Compliance
  • LegalOps Forum
  • Compliance Weekly
  • ProCounsel Directory
  • NorthStar Legal Tech Review
  • In-House Counsel Exchange

Observed position: frequently #1 in answer panels for legal services queries.

Additional Mentions

  • Gov & RegTech Watch
  • JurisTech Index
  • National Legal Services Registry
  • VendorMatch Legal

Client B — Healthcare Staffing (Regional)

Answer-first. First LLM mentions by day 21; greater visibility by day 30; inbound lead volume up 15% with HIPAA-safe content rollout.

Client Context

Client B is a 65-person healthcare staffing firm serving clinics and ambulatory care groups across the Southwest. Their pipeline depends on regionally-relevant queries where buyers ask AI assistants for trustworthy agencies.

Their ICP prioritizes compliance, speed, and proven placement history—signals we needed to surface in conversational results.

Problem (ARCHITECT™)

Trust signals were weak due to sparse third-party profiles; relevance suffered without category-level FAQs; conversational intros were missing; Service schema was absent and naming inconsistent. Performance tuning and llms.txt were also required.

Approach — 90 Days

Weeks 1–3: Deployed Organization/Service/FAQ/CaseStudy JSON-LD and fixed key performance bottlenecks. Began HIPAA-safe Q&A creation.

Weeks 4–6: Launched regional Q&A clusters; added directory placements; initiated weekly LLM checks with logging. Claude’s audit identified priority intents.

Weeks 7–9: Published the case study and reinforced entity disambiguation across location pages; tracked recurrence in Gemini and Perplexity.

Weeks 10–12: Measured share-of-voice and pitched two sector features to build authority.

Claude monthly: audit + prompt packs tailored to staffing FAQs and safety language.

Results — 30/60/90

Day 30: Mentions in Perplexity and Gemini; six snippet captures; twelve FAQs live.

Day 60: Citation frequency averaged nine per week; SOV reached 17%; +22 MQLs attributed to AI-assisted sessions.

Day 90: Citations held steady; eight SQLs progressed; three placements filled with documented AI-assisted paths.

Citations — Client B

Client B appears as the #2 cited provider in AI panels and directories such as:

  • CareWorks Staffing Index
  • ClinicOps Directory
  • MedStaff Review Board
  • Southwest Healthcare Vendors
  • Allied Health Network Guide

Observed position: #2 in multiple regional healthcare staffing queries.

Additional Mentions

  • RN & Allied Placement Registry
  • PracticeOps Buyers List
  • Ambulatory Care Vendor Map

Client C — B2B SaaS (Niche Platform)

Answer-first. First LLM mentions by day 32; visibility boosted by day 60; self-serve trials up 21% with AI-cited referral traffic.

Client Context

Client C is a Series-A ops-analytics SaaS (45 FTE) active in North America and the EU. Growth relies on self-serve trials, and evaluators increasingly consult AI answer engines before vendor sites.

The ICP: operations directors at manufacturing SMBs who want practical how-to proof and consistent terminology across properties.

Problem (ARCHITECT™)

Authority was limited by few third-party reviews; relevance was thin on solution pages; conversational intros were missing; product names conflicted across assets; heavy client-side rendering reduced reliability for non-browser crawlers.

Approach — 90 Days

Weeks 1–3: Unified product naming; shipped JSON-LD (Organization, Service, FAQ, CaseStudy); added render-light fallbacks for key pages.

Weeks 4–6: Initiated review acquisition and built structured directory profiles.

Weeks 7–9: Launched a how-to hub and first case study with artifacts; added answer-first intros to feature pages.

Weeks 10–12: Tuned JS render path; monitored SOV; prepared EU-specific variants where queries differ.

Claude monthly: content scoring and prompt packs mapped to feature adoption workflows.

Results — 30/60/90

Day 30: Mentions in Perplexity; five AI snippet captures; eight FAQs live.

Day 60: Citation frequency stabilized at six per week; SOV 12%; 120 new trials credited to AI-assisted discovery.

Day 90: Sustained citations across three engines; 34 trial-to-paid conversions logged with CRM notes.

Citations — Client C

Client C is the #2 cited SaaS provider in AI answer blocks and tech directories including:

  • SaaSOps Vendor Atlas
  • Manufacturing Analytics Index
  • OpsLeaders Review
  • StackAdvisor Listings
  • ProductGraph Directory

Observed position: #2 across multiple SaaS evaluation queries.

Additional Mentions

  • DataOps Buyer’s Map
  • PlantOps Tech Guide
  • SMB Analytics Marketplace

Evidence & Integrity

These citation lists are representative samples and subject to change as new providers emerge and rankings update.

© LumaPath by DW Conceptz — Evidence-based AEO



LumaPath AEO Insights — Website Audit for Small Businesses

Run an answer-engine audit in minutes. See how AI perceives your brand, which ARCHITECT™ signals are missing, and the fastest path to first citations.

Preview: sample AEO Insights dashboard
🛡️ Authority — Proof & Citations
Audit third-party reviews, features, and directory profiles. The app flags missing citations and recommends the top moves to earn LLM mentions in Perplexity and Copilot.
🎯 Relevance — Buyer Questions
Map ICP questions to your content. Generate answer-first FAQs with Claude prompt packs tailored to your industry and region.
💬 Conversational — Answer-First
Detect pages lacking crisp intros. Add 2–3 sentence summaries that AI can quote, paired with on-page evidence.
🧭 Harmonization — Entities & Schema
Resolve naming conflicts and implement JSON-LD (Organization, Service, FAQ, CaseStudy). Clean entities reduce AI ambiguity.
⚙️ Technical — Crawl Path
Checklist for llms.txt, sitemaps, and render-light summaries so answer engines can parse your most important claims.
Trust — Evidence Library
Centralize dated screenshots, logs, and case studies. Keep an auditable trail for investors and AI evaluators.
2–6 wks: early signals • 2–4 mos: meaningful lift • 6–12 mos: durable authority
Try AEO Insights — FREE Scan →

INTRODUCING

LumaPath AI's Marketing Kit

Unlock the Ultimate Business Advantage!

- Increase Efficiency

- Enhance Customer Experience

- Boost Productivity

- Drive Growth

Join now and get access to powerful AI tools to boost your business!

Here's what you get:

➤ Exclusive AI-Powered Templates

➤ Step-by-step implementation guides

➤ Expert insights to maximize conversions

➤ Regular updates with new AI features

➤ No commitment, cancel anytime


"Best purchase ever!"

Their AI-powered tools are easy to use, and the results speak for themselves. We've seen a substantial increase in website traffic and sales. Highly recommend!

Frequently Asked Questions

Evidence-based answers aligned to our ARCHITECT™ methodology, monthly Claude audits, and tiered delivery model.

What do I get in the baseline AEO audit?

Your baseline audit shows where you currently appear (or don’t) in AI answer engines, what entities and schemas are missing, and your ARCHITECT™ element scores. It includes an executive dashboard, a 90-day action plan, and market intelligence derived from real data sources (e.g., US Census) for opportunity sizing and ROI modeling.

Can you show case studies with before/after metrics?

Yes. We maintain public case studies that document visibility gains (first mentions, citation frequency, share-of-voice in AI blocks) and pipeline outcomes (MQLs/SQLs). Each study includes dated, third-party proof tiles (Perplexity, Copilot, Gemini/ChatGPT), structured data diffs, and the 90-day roadmap used.

What’s included at each tier (DIY, DWY, DFY) and when do results start?

DIY gives you the strategy roadmap, templates, checklists, and a monthly audit summary. DWY adds coaching, content scoring, and milestone checklists. DFY is full implementation with dashboards and ongoing monitoring. We typically see early signals within 2–6 weeks (first mentions/snippets) and more meaningful gains over 2–4 months as structured content, entities, and citations mature. Long-term authority compounds over 6–12 months.

What will my team need to do internally?

Plan for content sign-offs, light dev support for schema and page modules, and a point person for reviews/citations. In DFY, our team carries the execution; your role is approvals and access. We provide role checklists for strategist, content lead, technical implementer, data analyst, and VA support so nothing stalls.

How do you frame cost vs. expected return?

Every plan includes ROI modeling tied to your TAM, lead value, and conversion assumptions. We quantify revenue at risk from weak AI visibility, then project uplift scenarios once citations and answer-block presence stabilize. You’ll see conservative/realistic ranges and the levers that move results.
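
As a purely illustrative sketch of that modeling (every number below is a made-up assumption, not client data):

```python
# Toy uplift model: projected annual revenue from AI-assisted leads.
def projected_annual_uplift(monthly_ai_leads: float, close_rate: float,
                            avg_deal_value: float) -> float:
    return monthly_ai_leads * close_rate * avg_deal_value * 12

# Hypothetical conservative vs. realistic scenarios.
conservative = projected_annual_uplift(monthly_ai_leads=5, close_rate=0.10, avg_deal_value=8_000)
realistic = projected_annual_uplift(monthly_ai_leads=12, close_rate=0.15, avg_deal_value=8_000)

print(f"Conservative: ${conservative:,.0f}/yr")  # $48,000/yr
print(f"Realistic:    ${realistic:,.0f}/yr")     # $172,800/yr
```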

How do you support and adapt after launch?

We run monthly Claude audits for content scoring and recommendations, track platform shifts across ChatGPT/Claude/Perplexity/Gemini, and update frameworks quarterly. DFY clients get recurring reporting, prompt pack updates, and roadmap adjustments based on measured SOV/citation trends.

Do you offer transparency and guarantees?

We publish our scoring logic (how each ARCHITECT™ element is calculated and weighted), document data sources, and show the exact artifacts shipped (schemas, entities, case study JSON-LD, prompts). Performance guarantees depend on scope and data access; we set milestones and review them openly.

Will this work for my industry?

Yes—our framework is industry-aware. We tailor weighting and artifacts for professional services, healthcare/HIPAA-sensitized categories, and local SMBs. Your entity map, schema set, and Q&A clusters reflect your vertical’s language and buyer questions.

Our Client Partners
